Learning About Meetings
Most people participate in meetings almost every day, multiple times a day.
The study of meetings is important, but also challenging, as it requires an
understanding of social signals and complex interpersonal dynamics. Our aim in
this work is to use a data-driven approach to the science of meetings. We
provide tentative evidence that: i) it is possible to automatically detect when
during the meeting a key decision is taking place, from analyzing only the
local dialogue acts, ii) there are common patterns in the way social dialogue
acts are interspersed throughout a meeting, iii) at the time key decisions are
made, the amount of time left in the meeting can be predicted from the amount
of time that has passed, iv) it is often possible to predict whether a proposal
during a meeting will be accepted or rejected based entirely on the language
(the set of persuasive words) used by the speaker.
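The last finding, predicting whether a proposal will be accepted from the persuasive words a speaker uses, can be caricatured as a simple cue-word classifier. The word list and threshold below are illustrative assumptions, not the study's actual model:

```python
# Hypothetical sketch: score a proposal utterance by its rate of
# "persuasive" cue words. The cue list and threshold are invented
# for illustration; the paper's feature set is not reproduced here.
PERSUASIVE_CUES = {"should", "must", "clearly", "best", "important", "need"}

def persuasion_score(utterance: str) -> float:
    """Fraction of tokens that are persuasive cue words."""
    tokens = utterance.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,!?") in PERSUASIVE_CUES for t in tokens) / len(tokens)

def predict_accepted(utterance: str, threshold: float = 0.15) -> bool:
    """Predict acceptance when the cue-word rate exceeds the threshold."""
    return persuasion_score(utterance) > threshold
```

A real system would learn the vocabulary and decision boundary from labeled meeting transcripts rather than fixing them by hand.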
The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
We present the Bayesian Case Model (BCM), a general framework for Bayesian
case-based reasoning (CBR) and prototype classification and clustering. BCM
brings the intuitive power of CBR to a Bayesian generative framework. The BCM
learns prototypes, the "quintessential" observations that best represent
clusters in a dataset, by performing joint inference on cluster labels,
prototypes and important features. Simultaneously, BCM pursues sparsity by
learning subspaces, the sets of features that play important roles in the
characterization of the prototypes. The prototype and subspace representation
provides quantitative benefits in interpretability while preserving
classification accuracy. Human subject experiments verify statistically
significant improvements to participants' understanding when using explanations
produced by BCM, compared to those given by prior art.
Comment: Published in Neural Information Processing Systems (NIPS) 2014.
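The prototype-and-subspace idea can be illustrated with a distance-based caricature: within a cluster, take the "subspace" to be the features on which members agree most, and the "prototype" to be the member closest to all others on that subspace. This sketch is an invented simplification, not BCM's joint Bayesian inference:

```python
# Illustrative sketch of prototypes and subspaces (not BCM's inference).
def prototype_and_subspace(cluster, n_features_keep=1):
    """Given a cluster as a list of equal-length feature vectors, return
    (index of the prototype member, list of selected feature indices)."""
    dim = len(cluster[0])
    # Subspace: the features with the smallest spread across the cluster,
    # i.e. those that best characterize what the members have in common.
    spreads = [(max(x[j] for x in cluster) - min(x[j] for x in cluster), j)
               for j in range(dim)]
    subspace = sorted(j for _, j in sorted(spreads)[:n_features_keep])
    # Prototype: the member minimizing total distance to the others,
    # measured only on the selected subspace.
    def total_dist(i):
        return sum(abs(cluster[i][j] - x[j]) for x in cluster for j in subspace)
    proto = min(range(len(cluster)), key=total_dist)
    return proto, subspace
```

BCM instead infers cluster labels, prototypes, and feature subspaces jointly in a generative model; the sketch only conveys why a prototype plus a few characterizing features is easy to interpret.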
Inferring Robot Task Plans from Human Team Meetings: A Generative Modeling Approach with Logic-Based Prior
We aim to reduce the burden of programming and deploying autonomous systems
to work in concert with people in time-critical domains, such as military field
operations and disaster response. Deployment plans for these operations are
frequently negotiated on-the-fly by teams of human planners. A human operator
then translates the agreed upon plan into machine instructions for the robots.
We present an algorithm that reduces this translation burden by inferring the
final plan from a processed form of the human team's planning conversation. Our
approach combines probabilistic generative modeling with logical plan
validation used to compute a highly structured prior over possible plans. This
hybrid approach enables us to overcome the challenge of performing inference
over the large solution space with only a small amount of noisy data from the
team planning session. We validate the algorithm through human subject
experimentation and show we are able to infer a human team's final plan with
83% accuracy on average. We also describe a robot demonstration in which two
people plan and execute a first-response collaborative task with a PR2 robot.
To the best of our knowledge, this is the first work that integrates a logical
planning technique within a generative model to perform plan inference.
Comment: Appears in Proceedings of the Twenty-Seventh AAAI Conference on
Artificial Intelligence (AAAI-13).
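The hybrid idea above, a generative model constrained by a logical validity check, can be sketched as rejection sampling in which invalid plans receive zero prior probability. The toy plan space and validity rule below are invented for illustration only:

```python
# Hedged sketch: a proposal distribution generates candidate plans, and a
# logical validator acts as a hard prior that zeroes out invalid plans.
# The constraint here (no location visited twice) is a toy stand-in for
# the paper's logic-based plan validation.
import random

def is_valid(plan):
    """Toy logical constraint: each location appears at most once."""
    return len(plan) == len(set(plan))

def infer_plan(propose, n_samples=1000, seed=0):
    """Rejection-sample plans, keep only logically valid ones, and
    return the most frequently sampled valid plan."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_samples):
        plan = tuple(propose(rng))
        if is_valid(plan):                 # hard logic-based prior
            counts[plan] = counts.get(plan, 0) + 1
    return max(counts, key=counts.get) if counts else None
```

Restricting inference to the logically valid region is what lets a small amount of noisy dialogue data identify a plan in an otherwise very large solution space.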
Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations
Post-hoc explanations of machine learning models are crucial for people to
understand and act on algorithmic predictions. An intriguing class of
explanations is through counterfactuals, hypothetical examples that show people
how to obtain a different prediction. We posit that effective counterfactual
explanations should satisfy two properties: feasibility of the counterfactual
actions given user context and constraints, and diversity among the
counterfactuals presented. To this end, we propose a framework for generating
and evaluating a diverse set of counterfactual explanations based on
determinantal point processes. To evaluate the actionability of
counterfactuals, we provide metrics that enable comparison of
counterfactual-based methods to other local explanation methods. We further
address necessary tradeoffs and point to causal implications in optimizing for
counterfactuals. Our experiments on four real-world datasets show that our
framework can generate a set of counterfactuals that are diverse and well
approximate local decision boundaries, outperforming prior approaches to
generating diverse counterfactuals. We provide an implementation of the
framework at https://github.com/microsoft/DiCE.
Comment: 13 pages.
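The determinantal-point-process intuition behind the diversity objective can be sketched directly: among candidate counterfactuals, prefer a subset whose similarity-kernel matrix has a large determinant, since near-duplicate points drive the determinant toward zero. The kernel and brute-force search below are illustrative, not DiCE's implementation:

```python
# Illustrative DPP-style diversity selection (not DiCE's actual code).
import itertools
import math

def rbf_kernel(a, b, gamma=1.0):
    """Similarity between two points; 1.0 for identical points."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def det(m):
    """Determinant by cofactor expansion (fine for small subsets)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def most_diverse_subset(candidates, k):
    """Pick the size-k subset maximizing the determinant of its
    similarity matrix: similar points shrink it, diverse points grow it."""
    def diversity(sub):
        return det([[rbf_kernel(a, b) for b in sub] for a in sub])
    return max(itertools.combinations(candidates, k), key=diversity)
```

DiCE optimizes a continuous objective that trades this diversity term against proximity and feasibility constraints, rather than enumerating subsets.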
Latent Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
We present a general framework for Bayesian case-based reasoning and prototype classification and clustering -- the Latent Case Model (LCM). LCM learns the most representative prototype observations of a dataset by performing joint inference on cluster prototypes and features. Simultaneously, LCM pursues sparsity by learning subspaces, the small sets of features that play important roles in characterizing the prototypes. The prototype and subspace representation preserves interpretability in high-dimensional data. We validate that the approach preserves classification accuracy on standard datasets, and verify through human subject experiments that the output of LCM produces statistically significant improvements in participants' performance on a task requiring an understanding of clusters within a dataset.